<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom">
	<channel>
   <title>Tagged with interactive art - Processing 2.x and 3.x Forum</title>
   <link>https://forum.processing.org/two/discussions/tagged/feed.rss?Tag=interactive+art</link>
   <pubDate>Sun, 08 Aug 2021 20:44:19 +0000</pubDate>
   <description>Tagged with interactive art - Processing 2.x and 3.x Forum</description>
   <language>en-CA</language>
   <atom:link href="/two/discussions/tagged/feed.rss?Tag=interactive+art" rel="self" type="application/rss+xml" />
   <item>
      <title>Interactive art installation powered by Processing</title>
      <link>https://forum.processing.org/two/discussion/24530/interactive-art-installation-powered-by-processing</link>
      <pubDate>Fri, 13 Oct 2017 16:34:32 +0000</pubDate>
      <dc:creator>alexmiller</dc:creator>
      <guid isPermaLink="false">24530@/two/discussions</guid>
      <description><![CDATA[<p>A friend and I created an interactive installation that uses Processing to project generative graphics onto a geometric grid structure.</p>

<p><img src="http://spacefiller.space/algoplex2/promo1_small.jpg" alt="" /></p>

<p>More info (and a video) <a rel="nofollow" href="http://spacefiller.space/algoplex2/index.html">here</a>!</p>
]]></description>
   </item>
   <item>
      <title>marine: a framework for artistic interactivity experimentation</title>
      <link>https://forum.processing.org/two/discussion/22157/marine-a-framework-for-artistic-interactivity-experimentation</link>
      <pubDate>Sun, 23 Apr 2017 18:52:27 +0000</pubDate>
      <dc:creator>ricardoscholz</dc:creator>
      <guid isPermaLink="false">22157@/two/discussions</guid>
      <description><![CDATA[<p>Hi!</p>

<p>I've been working for a while on a tool for dancers to experiment with movement sensors, sound and light control, and live projection.</p>

<p>The whole projection part relies on <strong>Processing 3.2.1</strong>, and third-party developers can easily build plug-ins (written almost the same way as Processing sketches) that can be imported from the UI. So I thought it would be appropriate to share it here, in case someone is looking for something for their interactive artwork, Kinect-based game development, interactive musical interfaces, or anything else...</p>

<p>www.marineframework.org</p>

<p>I'm still improving the core module and UI module, so new ideas and suggestions are always welcome!</p>

<p>Hope you enjoy it!</p>
]]></description>
   </item>
   <item>
      <title>Call for Works</title>
      <link>https://forum.processing.org/two/discussion/21307/call-for-works</link>
      <pubDate>Fri, 10 Mar 2017 11:20:22 +0000</pubDate>
      <dc:creator>dbisig</dc:creator>
      <guid isPermaLink="false">21307@/two/discussions</guid>
      <description><![CDATA[<p>Immersive Lab<br />
An interactive audio-visual immersive space for artistic experimentation and experiences</p>

<p>- Call for Works -</p>

<p>Residency opportunity between May and June 2017 at the Institute for Computer Music and Sound Technology (ICST) of the Zurich University of the Arts.</p>

<p>We are looking for European artist(s) interested in creating a new work in and for the Immersive Lab. The residency offers the opportunity to realise a real-time interactive audio-visual piece that leverages the unique combination of 360 degree sound &amp; video projection and touch-based interaction.</p>

<p>The main goal of the Immersive Lab lies in the presentation of abstract, algorithmic, generative media works that place a strong focus on interaction, immersion and perception. However, we don't support works that represent pure game concepts or fixed-media pieces. Other than that, there are no limitations with respect to the content of the proposed work.</p>

<p>The artist(s) should possess both the technical and artistic expertise in the media-arts domains of audio, video/graphics and interaction. If necessary and artistically compelling, we can also accommodate a team of two artists, whose combined expertise meets these requirements (e.g. a collaboration between sound and visual artists). Our team will provide conceptual guidance and technical support but will not aid in content creation or provide software programming services.</p>

<p>The installation software runs under OS X and is built in a modular fashion. This provides the flexibility to easily integrate a variety of real-time software components for content generation. The touch detection system provides tracking information via OSC in the 'tuio' format. The projection on the cylinder via a video-mapping system is based on the Syphon context-sharing technique. Multichannel audio is directly output to the installation's audio interface.</p>
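<p>To give a feel for the touch pipeline, here is a minimal Processing sketch (an illustration only, not part of the installation software) that listens for standard TUIO 1.1 cursor messages, assuming the customary port 3333 and the oscP5 library:</p>

<pre><code>import oscP5.*;

OscP5 osc;
float touchX = 0.5, touchY = 0.5; // normalised cursor position (0..1)

void setup() {
  size(800, 400);
  osc = new OscP5(this, 3333); // TUIO trackers conventionally send OSC on port 3333
}

void oscEvent(OscMessage m) {
  // TUIO 1.1 cursor messages look like: /tuio/2Dcur "set" sessionID x y X Y m
  if (m.checkAddrPattern("/tuio/2Dcur") &amp;&amp; m.get(0).stringValue().equals("set")) {
    touchX = m.get(2).floatValue();
    touchY = m.get(3).floatValue();
  }
}

void draw() {
  background(0);
  ellipse(touchX * width, touchY * height, 30, 30); // echo the latest touch
}
</code></pre>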

<p>The goal of the residency is to deliver a finished piece, which becomes a permanent part of the catalogue of works of the Immersive Lab and can be shown in public exhibitions. The piece should run unattended and be maintained in running condition for a period of at least two years.</p>

<p>Ideally, the residency has a duration of two weeks and falls into the timespan between 1 May and 30 June 2017. Accommodation and per diems are covered; all other expenses must be covered by the artist(s).</p>

<p>Applicants should hand in the submission form, containing a CV, an artist statement, and a detailed work proposal, no later than 31 March 2017. Notification of selection will be given on 14 April.</p>

<p>For more information concerning the Immersive Lab, see http://immersivelab.zhdk.ch.</p>

<p>Description:</p>

<p>The 'Immersive Lab' is a media space that integrates panoramic video and surround audio with full touch interaction, where the entire screen serves as a touch surface. The 'Immersive Lab' provides a platform for a catalogue of artistic works that are specifically tailored to the unique situation this configuration offers. These works articulate the relationship between immersive media and direct interaction. It functions both as a space for experimental learning and creation and as a permanent audiovisual installation for the general public, showing finished pieces in a self-explanatory way.</p>

<p>As a platform, the installation is the fruit of several years of investigation and artistic creation, serving research projects, workshops, and residencies. The term immersion is used in a broader sense: apart from spatial envelopment by image and sound, additional levels of immersion are generated for the visitors. They enter a dedicated physical space, direct tactile interaction on the panoramic surface enhances their personal engagement, and group behaviour and social interactions arise within the shared space. Such an extended form of immersion provides a multi-faceted experience.</p>

<p>The compositions can be collaboratively created and combine visual and sonic material with generative and algorithmic methods. The artistic approach focuses on real-time pieces that react to visitor interaction and that take advantage of the panoramic nature of the installation.</p>

<p>Different forms of engagement are possible within the installation. The audience can freely explore the works and experience different modes of perception. Artists can experiment with the development of compositional strategies for working with different senses and artistic domains. The installation exposes foundational aspects of immersion, such as spatial and multi-sensory perception, which provide interesting topics for investigation.</p>

<p>Work in the 'Immersive Lab' happens in different phases and activities, and addresses different people. In a teaching context, any student can visit the lab on a guided tour. Students majoring in electronic music or media arts, however, are invited to actively learn by exploring the inner workings of existing pieces. Artists and advanced students have the opportunity to become involved more intensely by creating entirely new pieces. For this, the ICST offers to share its experience, methods, and tools for the development and realization of ideas for this particular media space. Finally, in the exhibition context, general audiences are invited to experience the catalogue of works.</p>
]]></description>
   </item>
   <item>
      <title>How to use ultrasonic sensor to trigger a video ? (Arduino+processing)</title>
      <link>https://forum.processing.org/two/discussion/19293/how-to-use-ultrasonic-sensor-to-trigger-a-video-arduino-processing</link>
      <pubDate>Fri, 25 Nov 2016 18:17:22 +0000</pubDate>
      <dc:creator>eliseber</dc:creator>
      <guid isPermaLink="false">19293@/two/discussions</guid>
      <description><![CDATA[<p>Hey!
It's the first time I'm using Arduino and Processing together, and I'm completely lost!
I'm designing an interactive installation, and I need to control the position in a video with a distance: as you get closer to the sensor, the video moves forward.
I've set up an Arduino sketch which gives the distance.
For Processing, I found a sketch which controls the position with the webcam (using the size of the spectator's head: as they get closer, their head gets bigger and the position moves forward). Do you have any ideas or thoughts about how I can modify this code? It's the same idea, with the Arduino instead of the webcam, but I don't know where to start...</p>

<p>Thank you very much!</p>

<p>Arduino :</p>

<pre><code>#include &lt;NewPing.h&gt;
#include &lt;Servo.h&gt;


#define TRIGGER_PIN  12  // Arduino pin tied to trigger pin on the ultrasonic sensor.
#define ECHO_PIN     11  // Arduino pin tied to echo pin on the ultrasonic sensor.
#define MAX_DISTANCE 200 // Maximum distance we want to ping for (in centimeters). Maximum sensor distance is rated at 400-500cm.

int LEDpin = 13;
Servo myservo;
int val;

NewPing sonar(TRIGGER_PIN, ECHO_PIN, MAX_DISTANCE); // NewPing setup of pins and maximum distance.

void setup() {
  Serial.begin(9600); // open the serial monitor at 9600 baud to see the ping results
  pinMode(LEDpin, OUTPUT);
  myservo.attach(9); // attaches the servo to pin 9
}

void loop() {
  delay(400); // wait 400 ms between pings; 29 ms is the shortest safe interval
  unsigned int uS = sonar.ping(); // send a ping, get the ping time in microseconds (uS)
  Serial.println(uS / US_ROUNDTRIP_CM); // convert the ping time to a distance in cm and print it (0 = outside the set range)

  val = uS / US_ROUNDTRIP_CM;
  val = map(val, 0, 172, 15, 180); // map the distance (0-172 cm) to a servo angle (15-180 degrees)
  myservo.write(val);
  delay(150);
}
</code></pre>

<hr />

<p>PROCESSING</p>

<pre><code>import gab.opencv.*;
import processing.video.*;
import java.awt.*;

Capture video;
OpenCV opencv;

Movie monSuperFilm;              // the movie to scrub through
float positionDuFilmEnSecondes;  // current position within the movie, in seconds
int surfaceVisages, surfaceMin, surfaceMax;

void setup() {
  size(1080, 720); // set this to the pixel dimensions of your movie

  // webcam + face detection
  video = new Capture(this, 640/2, 480/2);
  opencv = new OpenCV(this, 640/2, 480/2);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  video.start();

  // the movie file must be in the sketch's data folder
  monSuperFilm = new Movie(this, "pattern1080-6low-1.mov");
  monSuperFilm.loop();

  // the larger the face area in the camera image, the closer the viewer;
  // the program sums the areas of all detected faces -- try it with several people!
  surfaceMin = 400;  // face area corresponding to the start of the movie (min 0)
  surfaceMax = 1500; // face area corresponding to the end of the movie (max 640x480 = 307200)
}

void draw() {
  opencv.loadImage(video);

  stroke(0, 255, 0);
  strokeWeight(3);
  Rectangle[] faces = opencv.detect();
  println(faces.length);

  // total area covered by the detected faces
  surfaceVisages = 0;
  for (int i = 0; i &lt; faces.length; i++) {
    surfaceVisages += faces[i].width * faces[i].height;
  }

  fill(0);
  text(surfaceMin + " / " + surfaceVisages + " / " + surfaceMax, 10, 50);

  monSuperFilm.read(); // read the current movie frame
  // constrain the area so jump() never seeks past the end of the movie
  positionDuFilmEnSecondes = map(constrain(surfaceVisages, surfaceMin, surfaceMax),
                                 surfaceMin, surfaceMax, 0, monSuperFilm.duration());
  monSuperFilm.jump(positionDuFilmEnSecondes); // seek to that position
  image(monSuperFilm, 0, 0); // display the current frame
}

void captureEvent(Capture c) {
  c.read();
}
</code></pre>
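
<p>One possible direction (a sketch of an approach, not tested against your hardware): replace the face-area input with the distance the Arduino already prints over serial, using Processing's built-in serial library. It assumes the Arduino sketch above (one distance in cm per line, 9600 baud), that the Arduino is the first port in Serial.list(), and the movie file from the original sketch:</p>

<pre><code>import processing.serial.*;
import processing.video.*;

Serial arduino;
Movie monSuperFilm;
int distanceCm = 0; // latest reading from the ultrasonic sensor

void setup() {
  size(1080, 720);
  // open the first serial port at the Arduino's baud rate
  arduino = new Serial(this, Serial.list()[0], 9600);
  arduino.bufferUntil('\n'); // fire serialEvent() once per full line
  monSuperFilm = new Movie(this, "pattern1080-6low-1.mov");
  monSuperFilm.loop();
}

void serialEvent(Serial p) {
  String line = p.readStringUntil('\n');
  if (line != null) {
    distanceCm = int(trim(line)); // the Arduino prints one distance per line
  }
}

void draw() {
  // near (0 cm) = end of the film, far (172 cm, the range used above) = start
  float pos = map(constrain(distanceCm, 0, 172), 0, 172, monSuperFilm.duration(), 0);
  monSuperFilm.jump(pos);
  image(monSuperFilm, 0, 0);
}

void movieEvent(Movie m) {
  m.read();
}
</code></pre>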
]]></description>
   </item>
   <item>
      <title>Interactive video with mouse function?</title>
      <link>https://forum.processing.org/two/discussion/17427/interactive-video-with-mouse-function</link>
      <pubDate>Wed, 06 Jul 2016 02:04:19 +0000</pubDate>
      <dc:creator>nube101</dc:creator>
      <guid isPermaLink="false">17427@/two/discussions</guid>
      <description><![CDATA[<p>Looking for a way in Processing not only to loop a series of short videos, but also to have them react when the mouse moves horizontally over the media from one side to the other: the speed of the mouse movement then translates to the speed of the video. I'm trying to find a way to do this with a distance sensor, but I think sticking with a mouse will be simpler. Would really appreciate any help!</p>
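
<p>One way to start (a minimal sketch, not a ready-made function; the file name "clip.mp4" is a placeholder for your own video in the data folder) is to map the horizontal mouse speed onto the playback rate with <code>Movie.speed()</code>:</p>

<pre><code>import processing.video.*;

Movie clip;

void setup() {
  size(640, 360);
  clip = new Movie(this, "clip.mp4"); // placeholder: put your own file in the data folder
  clip.loop();
}

void draw() {
  // horizontal mouse speed in pixels per frame
  float mouseSpeed = abs(mouseX - pmouseX);
  // still mouse = normal speed (1x), fast mouse = up to 4x
  clip.speed(map(constrain(mouseSpeed, 0, 50), 0, 50, 1, 4));
  image(clip, 0, 0, width, height);
}

void movieEvent(Movie m) {
  m.read();
}
</code></pre>

<p>For a series of videos, the same idea works with an array of <code>Movie</code> objects, switching to the next one when the current movie nears its end.</p>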
]]></description>
   </item>
   <item>
      <title>My task (animated text)</title>
      <link>https://forum.processing.org/two/discussion/17404/my-task-animated-text</link>
      <pubDate>Sun, 03 Jul 2016 18:45:21 +0000</pubDate>
      <dc:creator>ambush762</dc:creator>
      <guid isPermaLink="false">17404@/two/discussions</guid>
      <description><![CDATA[<p>Hi. At my university I was given the task of making an interactive text animation in Processing, using three different letters and any font. I would be glad for any possible sketches; I have no ideas so far.</p>

<p>// here's my failed try :/</p>

<pre><code>PFont font;

void setup() {
  size(600, 600);
  background(0);
  noStroke();
  font = loadFont("HelveticaNeue-BlackCond-48.vlw");
  textFont(font);
}

void draw() {
  fill(random(0, 255)); // set a random grey before drawing, so it applies to this frame's letters
  text("G", pmouseX + 30, pmouseY);
  text("M", pmouseX, pmouseY);
  text("F", pmouseX, pmouseY - 30);
}
</code></pre>
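
<p>One possible direction for the assignment (a sketch of my own idea, using createFont() with the "Georgia" system font as an assumption, so no .vlw file is needed): each letter bobs on its own sine wave, with the mouse controlling speed and amplitude.</p>

<pre><code>String letters = "GMF";

void setup() {
  size(600, 600);
  textFont(createFont("Georgia", 96)); // any installed font works with createFont()
  textAlign(CENTER, CENTER);
}

void draw() {
  background(0);
  for (int i = 0; i &lt; letters.length(); i++) {
    float x = width * (i + 1) / 4.0;
    // each letter bobs on its own sine wave:
    // mouseX controls the speed, mouseY the amplitude
    float speed = map(mouseX, 0, width, 0.01, 0.2);
    float amp   = map(mouseY, 0, height, 10, 200);
    float y = height/2 + sin(frameCount * speed + i) * amp;
    fill(255);
    text(letters.charAt(i), x, y);
  }
}
</code></pre>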
]]></description>
   </item>
   <item>
      <title>«Non-Finito» generative sculpture</title>
      <link>https://forum.processing.org/two/discussion/17122/non-finito-generative-sculpture</link>
      <pubDate>Mon, 13 Jun 2016 10:37:21 +0000</pubDate>
      <dc:creator>alexr4</dc:creator>
      <guid isPermaLink="false">17122@/two/discussions</guid>
      <description><![CDATA[<p>Hi everyone.</p>

<p>I would like to share with you our latest piece, "Non-Finito", an interactive installation that creates a 3D marble sculpture from users who enter the Kinect's field of view. We use the Kinect v2 library and GLSL shaders for the materials and the generative marble texture.</p>

<p>We hope you will like it.</p>

<p>Non-Finito :
<a rel="nofollow" href="http://www.bonjour-lab.com/project/non-finito/">http://www.bonjour-lab.com/project/non-finito/</a></p>

<p>You can also see some making-of stills here: <a rel="nofollow" href="https://www.flickr.com/photos/alexr4/sets/72157666351804994">https://www.flickr.com/photos/alexr4/sets/72157666351804994</a></p>

<p><img src="http://www.bonjour-lab.com/wp-content/uploads/2016/06/06_non_finito.jpg" alt="" /></p>
]]></description>
   </item>
   <item>
      <title>Learn Computer Vision this summer in Berlin!</title>
      <link>https://forum.processing.org/two/discussion/15960/learn-computer-vision-this-summer-in-berlin</link>
      <pubDate>Mon, 11 Apr 2016 08:46:52 +0000</pubDate>
      <dc:creator>Irina_Spicaka</dc:creator>
      <guid isPermaLink="false">15960@/two/discussions</guid>
      <description><![CDATA[<p>A great opportunity for media artists and visual designers to <strong>learn Computer Vision from 6 June to 1 July</strong> in Berlin. During the 4-week course, participants will learn the fundamentals of computer vision and image processing with OpenCV and openFrameworks, work with 3D cameras and related algorithms, and develop and exhibit individual art projects.</p>

<p>The course approaches computer vision at an introductory level; basic experience in any programming language/platform is encouraged.</p>

<p>The application period is open now, and it is suggested to apply soon, as the course is limited to only 15 participants! <strong>Find out more about the course <a rel="nofollow" href="http://schoolofma.org/see-or-be-seen/">here</a>.</strong></p>

<p>Instructor - Chris Sugrue.</p>

<p>Organised by <strong>School of Machines - Making &amp; Make Believe</strong>.</p>
]]></description>
   </item>
   </channel>
</rss>